Reversed Metaphone using Keras seq2seq

This is a quick demo, just for fun, of a Keras seq2seq model 'translating' Double Metaphone codes back into German surnames.

This notebook is based on the Keras example found at https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py. The only changes are the data (now a set of 8000 German surnames) and the use of Double Metaphone to transform those surnames.
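
To get a feel for the transform, here is a minimal sketch of what Double Metaphone does to a spelled name (the exact codes in the comment are illustrative and may vary with the abydos version):

from abydos.phonetic import DoubleMetaphone

dm = DoubleMetaphone()
# encode() returns a (primary, secondary) pair of codes; 'Schmidt', for example,
# collapses to something like ('XMT', 'SMT'). The model below learns to invert
# the primary code back into a plausible spelling.
print(dm.encode('Schmidt'))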


In [1]:
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np

from abydos.phonetic import DoubleMetaphone
dm = DoubleMetaphone()


Using TensorFlow backend.

In [2]:
batch_size = 64  # Batch size for training.
epochs = 100  # Number of epochs to train for.
latent_dim = 256  # Latent dimensionality of the encoding space.
num_samples = 10000  # Number of samples to train on.

The data is the nachnamen.csv file from the tests/corpora directory. Only the first field, which contains the surnames, is used.


In [3]:
data_path = '../tests/corpora/nachnamen.csv'

Below, as each line is read from the file, the first field is retained and its first (primary) Double Metaphone encoding is calculated. For training, the Double Metaphone code is the input and the original name is the target.


In [4]:
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    target_text = line.split(',')[0]
    input_text = dm.encode(target_text)[0]
    # We use "tab" as the "start sequence" character
    # for the targets, and "\n" as "end sequence" character.
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)
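
A quick way to sanity-check the vectorization is to print a few of the resulting (input, target) pairs; a minimal sketch (the actual names depend on nachnamen.csv, so no output is shown here):

for inp, tgt in zip(input_texts[:5], target_texts[:5]):
    # repr() makes the tab start-marker and newline end-marker visible.
    print(repr(inp), '->', repr(tgt))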

In [5]:
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])

In [6]:
print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)


Number of samples: 9999
Number of unique input tokens: 14
Number of unique output tokens: 60
Max sequence length for inputs: 9
Max sequence length for outputs: 17

In [7]:
input_token_index = dict(
    [(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
    [(char, i) for i, char in enumerate(target_characters)])

In [8]:
encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')

In [9]:
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.
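
To make the teacher-forcing offset concrete, here is a toy illustration with a hypothetical target sequence '\tMeier\n': the decoder sees the character at timestep t as input and is trained to predict the character at t + 1.

sample = '\tMeier\n'
# Pairs of (decoder input char, decoder target char) per timestep:
# ('\t','M'), ('M','e'), ('e','i'), ('i','e'), ('e','r'), ('r','\n')
print(list(zip(sample, sample[1:])))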

In [10]:
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]


WARNING:tensorflow:From /home/chrislit/fast/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.

In [11]:
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

In [12]:
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

Train and save the model below:


In [13]:
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
# Save model
model.save('s2s.h5')


WARNING:tensorflow:From /home/chrislit/fast/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /home/chrislit/fast/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
Train on 7999 samples, validate on 2000 samples
Epoch 1/100
7999/7999 [==============================] - 7s 923us/step - loss: 1.1949 - val_loss: 1.1601
Epoch 2/100
7999/7999 [==============================] - 8s 966us/step - loss: 0.9861 - val_loss: 0.9788
Epoch 3/100
7999/7999 [==============================] - 7s 923us/step - loss: 0.8069 - val_loss: 0.8244
Epoch 4/100
7999/7999 [==============================] - 7s 915us/step - loss: 0.6871 - val_loss: 0.7573
Epoch 5/100
7999/7999 [==============================] - 7s 914us/step - loss: 0.6026 - val_loss: 0.6744
Epoch 6/100
7999/7999 [==============================] - 7s 874us/step - loss: 0.5388 - val_loss: 0.6082
Epoch 7/100
7999/7999 [==============================] - 8s 965us/step - loss: 0.4954 - val_loss: 0.5774
Epoch 8/100
7999/7999 [==============================] - 8s 1ms/step - loss: 0.4603 - val_loss: 0.5492
Epoch 9/100
7999/7999 [==============================] - 5s 647us/step - loss: 0.4322 - val_loss: 0.5262
Epoch 10/100
7999/7999 [==============================] - 7s 837us/step - loss: 0.4105 - val_loss: 0.5438
Epoch 11/100
7999/7999 [==============================] - 7s 878us/step - loss: 0.3910 - val_loss: 0.5053
Epoch 12/100
7999/7999 [==============================] - 8s 951us/step - loss: 0.3747 - val_loss: 0.4983
Epoch 13/100
7999/7999 [==============================] - 6s 806us/step - loss: 0.3595 - val_loss: 0.4938
Epoch 14/100
7999/7999 [==============================] - 8s 952us/step - loss: 0.3466 - val_loss: 0.4832
Epoch 15/100
7999/7999 [==============================] - 5s 672us/step - loss: 0.3346 - val_loss: 0.4732
Epoch 16/100
7999/7999 [==============================] - 10s 1ms/step - loss: 0.3233 - val_loss: 0.4698
Epoch 17/100
7999/7999 [==============================] - 11s 1ms/step - loss: 0.3131 - val_loss: 0.4702
Epoch 18/100
7999/7999 [==============================] - 8s 1ms/step - loss: 0.3039 - val_loss: 0.4594
Epoch 19/100
7999/7999 [==============================] - 5s 619us/step - loss: 0.2942 - val_loss: 0.4632
Epoch 20/100
7999/7999 [==============================] - 8s 957us/step - loss: 0.2859 - val_loss: 0.4825
Epoch 21/100
7999/7999 [==============================] - 4s 552us/step - loss: 0.2779 - val_loss: 0.4645
Epoch 22/100
7999/7999 [==============================] - 6s 692us/step - loss: 0.2695 - val_loss: 0.4670
Epoch 23/100
7999/7999 [==============================] - 7s 867us/step - loss: 0.2614 - val_loss: 0.4676
Epoch 24/100
7999/7999 [==============================] - 5s 669us/step - loss: 0.2553 - val_loss: 0.4676
Epoch 25/100
7999/7999 [==============================] - 6s 773us/step - loss: 0.2489 - val_loss: 0.4796
Epoch 26/100
7999/7999 [==============================] - 13s 2ms/step - loss: 0.2423 - val_loss: 0.4763
Epoch 27/100
7999/7999 [==============================] - 10s 1ms/step - loss: 0.2358 - val_loss: 0.4795
Epoch 28/100
7999/7999 [==============================] - 7s 833us/step - loss: 0.2297 - val_loss: 0.4829
Epoch 29/100
7999/7999 [==============================] - 8s 959us/step - loss: 0.2246 - val_loss: 0.4743
Epoch 30/100
7999/7999 [==============================] - 4s 520us/step - loss: 0.2190 - val_loss: 0.4844
Epoch 31/100
7999/7999 [==============================] - 6s 743us/step - loss: 0.2135 - val_loss: 0.4874
Epoch 32/100
7999/7999 [==============================] - 5s 586us/step - loss: 0.2091 - val_loss: 0.4947
Epoch 33/100
7999/7999 [==============================] - 6s 793us/step - loss: 0.2056 - val_loss: 0.4997
Epoch 34/100
7999/7999 [==============================] - 6s 739us/step - loss: 0.2005 - val_loss: 0.4920
Epoch 35/100
7999/7999 [==============================] - 6s 719us/step - loss: 0.1962 - val_loss: 0.5075
Epoch 36/100
7999/7999 [==============================] - 6s 805us/step - loss: 0.1926 - val_loss: 0.4997
Epoch 37/100
7999/7999 [==============================] - 5s 658us/step - loss: 0.1892 - val_loss: 0.5018
Epoch 38/100
7999/7999 [==============================] - 9s 1ms/step - loss: 0.1859 - val_loss: 0.5108
Epoch 39/100
7999/7999 [==============================] - 6s 735us/step - loss: 0.1823 - val_loss: 0.5202
Epoch 40/100
7999/7999 [==============================] - 4s 507us/step - loss: 0.1796 - val_loss: 0.5315
Epoch 41/100
7999/7999 [==============================] - 8s 989us/step - loss: 0.1769 - val_loss: 0.5349
Epoch 42/100
7999/7999 [==============================] - 7s 862us/step - loss: 0.1738 - val_loss: 0.5382
Epoch 43/100
7999/7999 [==============================] - 4s 522us/step - loss: 0.1707 - val_loss: 0.5329
Epoch 44/100
7999/7999 [==============================] - 7s 872us/step - loss: 0.1681 - val_loss: 0.5309
Epoch 45/100
7999/7999 [==============================] - 5s 622us/step - loss: 0.1658 - val_loss: 0.5306
Epoch 46/100
7999/7999 [==============================] - 7s 905us/step - loss: 0.1645 - val_loss: 0.5384
Epoch 47/100
7999/7999 [==============================] - 9s 1ms/step - loss: 0.1620 - val_loss: 0.5458
Epoch 48/100
7999/7999 [==============================] - 5s 573us/step - loss: 0.1593 - val_loss: 0.5504
Epoch 49/100
7999/7999 [==============================] - 6s 689us/step - loss: 0.1577 - val_loss: 0.5532
Epoch 50/100
7999/7999 [==============================] - 7s 887us/step - loss: 0.1557 - val_loss: 0.5583
Epoch 51/100
7999/7999 [==============================] - 5s 652us/step - loss: 0.1542 - val_loss: 0.5571
Epoch 52/100
7999/7999 [==============================] - 5s 631us/step - loss: 0.1527 - val_loss: 0.5618
Epoch 53/100
7999/7999 [==============================] - 7s 813us/step - loss: 0.1514 - val_loss: 0.5620
Epoch 54/100
7999/7999 [==============================] - 7s 820us/step - loss: 0.1493 - val_loss: 0.5682
Epoch 55/100
7999/7999 [==============================] - 5s 659us/step - loss: 0.1478 - val_loss: 0.5611
Epoch 56/100
7999/7999 [==============================] - 7s 895us/step - loss: 0.1463 - val_loss: 0.5786
Epoch 57/100
7999/7999 [==============================] - 6s 803us/step - loss: 0.1457 - val_loss: 0.5708
Epoch 58/100
7999/7999 [==============================] - 6s 748us/step - loss: 0.1442 - val_loss: 0.5782
Epoch 59/100
7999/7999 [==============================] - 8s 988us/step - loss: 0.1436 - val_loss: 0.5775
Epoch 60/100
7999/7999 [==============================] - 5s 579us/step - loss: 0.1417 - val_loss: 0.5942
Epoch 61/100
7999/7999 [==============================] - 10s 1ms/step - loss: 0.1407 - val_loss: 0.5953
Epoch 62/100
7999/7999 [==============================] - 6s 699us/step - loss: 0.1396 - val_loss: 0.5912
Epoch 63/100
7999/7999 [==============================] - 10s 1ms/step - loss: 0.1390 - val_loss: 0.5961
Epoch 64/100
7999/7999 [==============================] - 7s 842us/step - loss: 0.1376 - val_loss: 0.5970
Epoch 65/100
7999/7999 [==============================] - 9s 1ms/step - loss: 0.1371 - val_loss: 0.6013
Epoch 66/100
7999/7999 [==============================] - 7s 837us/step - loss: 0.1359 - val_loss: 0.6051
Epoch 67/100
7999/7999 [==============================] - 7s 900us/step - loss: 0.1351 - val_loss: 0.6040
Epoch 68/100
7999/7999 [==============================] - 8s 1ms/step - loss: 0.1348 - val_loss: 0.6024
Epoch 69/100
7999/7999 [==============================] - 8s 1ms/step - loss: 0.1334 - val_loss: 0.6108
Epoch 70/100
7999/7999 [==============================] - 12s 1ms/step - loss: 0.1335 - val_loss: 0.6104
Epoch 71/100
7999/7999 [==============================] - 7s 857us/step - loss: 0.1326 - val_loss: 0.6196
Epoch 72/100
7999/7999 [==============================] - 5s 572us/step - loss: 0.1322 - val_loss: 0.6115
Epoch 73/100
7999/7999 [==============================] - 7s 907us/step - loss: 0.1317 - val_loss: 0.6113
Epoch 74/100
7999/7999 [==============================] - 6s 759us/step - loss: 0.1311 - val_loss: 0.6229
Epoch 75/100
7999/7999 [==============================] - 9s 1ms/step - loss: 0.1300 - val_loss: 0.6247
Epoch 76/100
7999/7999 [==============================] - 6s 702us/step - loss: 0.1296 - val_loss: 0.6227
Epoch 77/100
7999/7999 [==============================] - 7s 896us/step - loss: 0.1288 - val_loss: 0.6250
Epoch 78/100
7999/7999 [==============================] - 8s 1ms/step - loss: 0.1283 - val_loss: 0.6249
Epoch 79/100
7999/7999 [==============================] - 7s 934us/step - loss: 0.1281 - val_loss: 0.6245
Epoch 80/100
7999/7999 [==============================] - 6s 761us/step - loss: 0.1277 - val_loss: 0.6223
Epoch 81/100
7999/7999 [==============================] - 8s 1ms/step - loss: 0.1263 - val_loss: 0.6368
Epoch 82/100
7999/7999 [==============================] - 5s 645us/step - loss: 0.1261 - val_loss: 0.6311
Epoch 83/100
7999/7999 [==============================] - 7s 829us/step - loss: 0.1260 - val_loss: 0.6272
Epoch 84/100
7999/7999 [==============================] - 6s 762us/step - loss: 0.1253 - val_loss: 0.6353
Epoch 85/100
7999/7999 [==============================] - 10s 1ms/step - loss: 0.1253 - val_loss: 0.6375
Epoch 86/100
7999/7999 [==============================] - 6s 745us/step - loss: 0.1245 - val_loss: 0.6413
Epoch 87/100
7999/7999 [==============================] - 8s 957us/step - loss: 0.1242 - val_loss: 0.6443
Epoch 88/100
7999/7999 [==============================] - 7s 837us/step - loss: 0.1233 - val_loss: 0.6436
Epoch 89/100
7999/7999 [==============================] - 7s 926us/step - loss: 0.1235 - val_loss: 0.6423
Epoch 90/100
7999/7999 [==============================] - 7s 870us/step - loss: 0.1229 - val_loss: 0.6498
Epoch 91/100
7999/7999 [==============================] - 10s 1ms/step - loss: 0.1228 - val_loss: 0.6509
Epoch 92/100
7999/7999 [==============================] - 6s 763us/step - loss: 0.1221 - val_loss: 0.6504
Epoch 93/100
7999/7999 [==============================] - 7s 930us/step - loss: 0.1220 - val_loss: 0.6566
Epoch 94/100
7999/7999 [==============================] - 8s 965us/step - loss: 0.1215 - val_loss: 0.6492
Epoch 95/100
7999/7999 [==============================] - 8s 941us/step - loss: 0.1213 - val_loss: 0.6522
Epoch 96/100
7999/7999 [==============================] - 9s 1ms/step - loss: 0.1212 - val_loss: 0.6508
Epoch 97/100
7999/7999 [==============================] - 4s 558us/step - loss: 0.1208 - val_loss: 0.6533
Epoch 98/100
7999/7999 [==============================] - 6s 738us/step - loss: 0.1199 - val_loss: 0.6577
Epoch 99/100
7999/7999 [==============================] - 7s 839us/step - loss: 0.1200 - val_loss: 0.6568
Epoch 100/100
7999/7999 [==============================] - 6s 757us/step - loss: 0.1192 - val_loss: 0.6532
/home/chrislit/fast/anaconda3/lib/python3.7/site-packages/keras/engine/network.py:877: UserWarning: Layer lstm_2 was passed non-serializable keyword arguments: {'initial_state': [<tf.Tensor 'lstm_1/while/Exit_2:0' shape=(?, 256) dtype=float32>, <tf.Tensor 'lstm_1/while/Exit_3:0' shape=(?, 256) dtype=float32>]}. They will not be included in the serialized model (and thus will be missing at deserialization time).
  '. They will not be included '
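
The saved s2s.h5 can be reused in a later session. A minimal sketch, assuming the model-definition cells above have been re-run first; per the warning above, the initial_state keyword is not serialized, so restoring weights into a freshly built model is the safer route:

from keras.models import load_model

# Option 1: reload the full model (the initial_state wiring noted in the
# warning may be missing at deserialization time).
# model = load_model('s2s.h5')

# Option 2: rebuild the architecture by re-running the cells above,
# then restore only the weights.
model.load_weights('s2s.h5')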

In [14]:
# Next: inference mode (sampling).
# Here's the drill:
# 1) encode input and retrieve initial decoder state
# 2) run one step of decoder with this initial state
# and a "start of sequence" token as target.
# Output will be the next target token
# 3) Repeat with the current target token and current states

# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)

In [15]:
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())

In [16]:
def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.

    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)

        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char

        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '\n' or
           len(decoded_sentence) > max_decoder_seq_length):
            stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.

        # Update states
        states_value = [h, c]

    return decoded_sentence

Finally, some decoded examples are shown below. The decoded sequences are generally German-like surnames (some quite common), and they are names whose Double Metaphone encoding matches the input sequence.
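
As a quick sanity check of that round-trip claim, the decoded names can be re-encoded and compared against the input codes. A minimal sketch, reusing dm, decode_sequence, encoder_input_data, and input_texts from the cells above (no particular match rate is claimed here):

matches = 0
n_checked = 100
for seq_index in range(n_checked):
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    # Strip the trailing '\n' stop character before re-encoding.
    decoded = decode_sequence(input_seq).strip()
    if dm.encode(decoded)[0] == input_texts[seq_index]:
        matches += 1
print('Round-trip matches:', matches, 'of', n_checked)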


In [17]:
for seq_index in range(100):
    # Take one sequence (part of the training set)
    # for trying out decoding.
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('Input sequence:', input_texts[seq_index])
    print('Decoded sequence:', decoded_sentence)


Input sequence: MLR
Decoded sequence: Mehler

Input sequence: XMT
Decoded sequence: Schmid

Input sequence: XNTR
Decoded sequence: Schneider

Input sequence: FXR
Decoded sequence: Fecher

Input sequence: APR
Decoded sequence: Opper

Input sequence: MR
Decoded sequence: Mayer

Input sequence: AKNR
Decoded sequence: Wagner

Input sequence: PKR
Decoded sequence: Böker

Input sequence: XLS
Decoded sequence: Scholz

Input sequence: HFMN
Decoded sequence: Hofmann

Input sequence: XFR
Decoded sequence: Schaefer

Input sequence: KK
Decoded sequence: Koc

Input sequence: PR
Decoded sequence: Baar

Input sequence: RKTR
Decoded sequence: Richter

Input sequence: KLN
Decoded sequence: Kleen

Input sequence: XRTR
Decoded sequence: Schröder

Input sequence: ALF
Decoded sequence: Wolff

Input sequence: NMN
Decoded sequence: Neumann

Input sequence: XRS
Decoded sequence: Schwarz

Input sequence: SMRMN
Decoded sequence: Zimmermann

Input sequence: PRN
Decoded sequence: Braun

Input sequence: HFMN
Decoded sequence: Hofmann

Input sequence: HRTMN
Decoded sequence: Hartmann

Input sequence: XMT
Decoded sequence: Schmid

Input sequence: KRKR
Decoded sequence: Krüger

Input sequence: XMTS
Decoded sequence: Schmitz

Input sequence: LNJ
Decoded sequence: Lange

Input sequence: ARNR
Decoded sequence: Werner

Input sequence: KRS
Decoded sequence: Krause

Input sequence: MR
Decoded sequence: Mayer

Input sequence: XMT
Decoded sequence: Schmid

Input sequence: MR
Decoded sequence: Mayer

Input sequence: LMN
Decoded sequence: Lühmann

Input sequence: MR
Decoded sequence: Mayer

Input sequence: ALTR
Decoded sequence: Wolter

Input sequence: KNK
Decoded sequence: König

Input sequence: KLR
Decoded sequence: Kohler

Input sequence: HPR
Decoded sequence: Huber

Input sequence: HRMN
Decoded sequence: Hermann

Input sequence: KSR
Decoded sequence: Kaiser

Input sequence: XLS
Decoded sequence: Scholz

Input sequence: PTRS
Decoded sequence: Peters

Input sequence: FKS
Decoded sequence: Fix

Input sequence: LNK
Decoded sequence: Link

Input sequence: MLR
Decoded sequence: Mehler

Input sequence: XLS
Decoded sequence: Scholz

Input sequence: AS
Decoded sequence: Weiss

Input sequence: JNK
Decoded sequence: Jahnke

Input sequence: HN
Decoded sequence: Heyen

Input sequence: KLR
Decoded sequence: Kohler

Input sequence: R0
Decoded sequence: Rath

Input sequence: FJL
Decoded sequence: Vögele

Input sequence: FRNK
Decoded sequence: Frank

Input sequence: XPRT
Decoded sequence: Schubert

Input sequence: FRTRX
Decoded sequence: Friedrich

Input sequence: PRKR
Decoded sequence: Brucker

Input sequence: PK
Decoded sequence: Bock

Input sequence: KN0R
Decoded sequence: Günther

Input sequence: ANKLR
Decoded sequence: Winkler

Input sequence: PMN
Decoded sequence: Buhmann

Input sequence: LRNS
Decoded sequence: Lewerenz

Input sequence: SMN
Decoded sequence: Siemon

Input sequence: ALPRKT
Decoded sequence: Albrecht

Input sequence: XMKR
Decoded sequence: Schuhmacher

Input sequence: ANTR
Decoded sequence: Andrae

Input sequence: KRS
Decoded sequence: Krause

Input sequence: XSTR
Decoded sequence: Schuster

Input sequence: PM
Decoded sequence: Böhm

Input sequence: LTK
Decoded sequence: Lüdke

Input sequence: FRNK
Decoded sequence: Frank

Input sequence: MRTN
Decoded sequence: Martini

Input sequence: FKT
Decoded sequence: Fecht

Input sequence: KRMR
Decoded sequence: Kraemer

Input sequence: JKR
Decoded sequence: Jager

Input sequence: STN
Decoded sequence: Stein

Input sequence: SMR
Decoded sequence: Siemer

Input sequence: KRS
Decoded sequence: Krause

Input sequence: AT
Decoded sequence: Otte

Input sequence: HS
Decoded sequence: Hass

Input sequence: XLT
Decoded sequence: Schuldt

Input sequence: PRNT
Decoded sequence: Brandau

Input sequence: KRF
Decoded sequence: Graf

Input sequence: XRPR
Decoded sequence: Schreiber

Input sequence: HNRX
Decoded sequence: Heinrich

Input sequence: STL
Decoded sequence: Stehle

Input sequence: SKLR
Decoded sequence: Siegler

Input sequence: KN
Decoded sequence: Kiehn

Input sequence: HNSN
Decoded sequence: Heinsohn

Input sequence: ANJL
Decoded sequence: Engel

Input sequence: TTRX
Decoded sequence: Diederich

Input sequence: PX
Decoded sequence: Busch

Input sequence: PL
Decoded sequence: Biehl

Input sequence: KN
Decoded sequence: Kiehn

Input sequence: TMS
Decoded sequence: Thomas

Input sequence: HRN
Decoded sequence: Heeren

Input sequence: PRKMN
Decoded sequence: Brachmann

Input sequence: SR
Decoded sequence: Sari

Input sequence: ALF
Decoded sequence: Wolff

Input sequence: PFFR
Decoded sequence: Pfeiffer

Input sequence: ARNST
Decoded sequence: Ernst